Developing a Thin and High Performance Implementation of Message Passing Interface

Authors

  • Theewara Vorakosit
  • Putchong Uthayopas
Abstract

The communication library is a substantially important part of the development of parallel applications on PC clusters. MPI is currently the most important message passing standard in use worldwide. Although powerful, MPI is very complex and requires a certain amount of effort to learn. In fact, only a basic set of MPI functions is enough to develop a large class of parallel applications. There is also a need to explore other aspects of message passing programming that remain inadequately studied, including fault tolerance, debugging, performance optimization, and usability of the programming environment. In this paper, the work on implementing a compact but high performance MPI implementation called MPITH is presented. MPITH is a communication library for PC clusters that conforms to a subset of the most used functions in the MPI 1.1 standard. This paper discusses the architecture, design, and implementation of MPITH, along with a comparison of MPITH performance against MPICH and LAM on a PC cluster. The experimental results show that MPITH can deliver performance comparable to both MPICH and LAM.

Introduction and Related Works

Commodity PC clusters have proven to be a viable solution for providing very high computing power. Moreover, cluster systems can be used in many fields such as scientific computing, high performance web clusters, and high availability systems. To use a cluster for compute-intensive work, users need a parallel application specially designed for the cluster. These applications usually communicate by message passing through a communication library. Hence, the communication library plays an important role in the development of parallel applications for clusters.[1]

[1] This research is supported in part by KURDI grant SRU and Advanced Micro Devices Far East Inc.

ANSCSE6 2002 - D35

The performance of a communication library depends mostly on internal algorithms such as data buffering, the communication protocol, and the communication algorithm. Most communication-related algorithms used in a library involve collective operations. A communication model such as LogP [5] can be used to create a communication schedule; the optimization of communication schedules based on LogP can be found in [8]. A communication library not only provides efficient communication, but also provides other useful features such as data manipulation and group communication. Data manipulation helps the programmer compute sums, maximums, and minimums. Group communication is a mechanism for communicating among more than two processes at a time. To make parallel programs portable, several standard library interfaces have been invented, such as PVM [6, 12] and MPI [10, 11]. MPI is currently the most important standard; it defines both the syntax and the semantics of MPI functions. The most widely used communication libraries that conform to the MPI standard are MPICH [7] and LAM [3]. MPICH is developed by Argonne National Laboratory. The current version of MPICH is 1.2.3; it supports the MPI 1.2 standard and part of MPI 2.0, and is freely distributed under an open source license. The core of MPICH is written in C, but MPICH also includes a C++ interface implemented by the University of Notre Dame. LAM is developed by the University of Notre Dame. The current version is 6.5.6. LAM is developed in C++ but provides a C interface to the programmer. The MPI standard consists of a large and complex set of functions; for example, MPICH supports as many as 280 functions. This complexity makes MPI difficult to learn. In addition, an MPI implementation that supports all these functions is huge, complex, slow, and potentially less reliable. One important question to ask is whether programmers of parallel programs need this complexity or not.
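To illustrate how a model such as LogP can drive a communication schedule, the following sketch computes the completion time of a greedy binomial-tree broadcast under the four LogP parameters (latency L, per-message overhead o, gap g, and process count P). This is a minimal simulation written for this discussion, not code from MPITH; the greedy rule (whichever process can send soonest forwards to a new process) is the classic LogP broadcast schedule.

```python
import heapq

def binomial_broadcast_time(P, L, o, g):
    """Earliest time at which all P processes hold the message under LogP,
    assuming every process that has the data keeps forwarding it greedily."""
    ready = [0.0]          # times at which processes become ready to send; root at t = 0
    arrival = [0.0]        # arrival times of the message, root included
    while len(arrival) < P:
        t = heapq.heappop(ready)           # earliest available sender
        recv_time = t + o + L + o          # send overhead + latency + receive overhead
        arrival.append(recv_time)
        heapq.heappush(ready, t + max(o, g))   # sender may transmit again after max(o, g)
        heapq.heappush(ready, recv_time)       # receiver joins the pool of senders
    return max(arrival)
```

With hypothetical parameters L = 2, o = 1, g = 1, one message reaches a second process at time 4, and four processes are covered by time 6; an optimizer like the one in [8] searches over such schedules for the cheapest tree shape.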
Therefore, a study has been conducted to see how many MPI functions are used by several popular packages: PETSc [2], MPI Blacs [4], MPI Povray, the HPL benchmark, and PGAPack [9]. Table 1 and Figure 1 show the number of functions used in each package.

Table 1: MPI functions used per application

  Application   MPI Function Count
  PETSc         52
  MPI Blacs     38
  MPI Povray    11
  HPL           21
  PGAPack       14
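A survey like the one behind Table 1 can be approximated by scanning source code for distinct MPI function identifiers. The sketch below is a hypothetical reconstruction of such a counter, not the method the paper used; it relies on the MPI naming convention that functions are written `MPI_Capitalized` while constants such as `MPI_COMM_WORLD` are all uppercase.

```python
import re

# Function names follow MPI_Xxx... (first letter after the prefix is uppercase,
# second is lowercase); constants like MPI_COMM_WORLD or MPI_INT are excluded.
MPI_CALL = re.compile(r"\bMPI_[A-Z][a-z]\w*")

def distinct_mpi_functions(source: str) -> set:
    """Return the set of distinct MPI_* function identifiers appearing in source."""
    return set(MPI_CALL.findall(source))

sample = """
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Send(buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    MPI_Finalize();
"""
```

Running `distinct_mpi_functions` over a package's sources and taking the set size gives a per-application count in the spirit of Table 1, although a real survey would also have to handle macros and wrapper functions.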


Similar Articles

Parallel Object-oriented Design in Fortran for Beam Dynamics Simulations∗

In this paper we describe an object-oriented software design approach, using Fortran 90 (F90) and the Message Passing Interface (MPI), for modeling the transport of intense charged particle beams. The object-oriented approach improves the maintainability, reusability, and extensibility of the software, while the use of explicit message passing provides the freedom necessary to achieve high perf...


GOOMPI: A Generic Object Oriented Message Passing Interface

This paper discusses the application of object-oriented and generic programming techniques in high performance parallel computing, then presents a new message-passing interface based on object-oriented and generic programming techniques — GOOMPI, describes its design and implementation issues, shows its values in designing and implementing parallel algorithms or applications based on the messag...


Analysis of Implementation Options for MPI-2 One-Sided

The Message Passing Interface provides an interface for onesided communication as part of the MPI-2 standard. The semantics specified by MPI-2 allow for a number of different implementation avenues, each with different performance characteristics. Within the context of Open MPI, a freely available high performance MPI implementation, we analyze a number of implementation possibilities, includin...


LogGP Quantified: The Case for MPI

LogGP is a simple parallel machine model that reflects the important parameters required to measure the real performance of parallel computers (Alexandrov et al., 1995). The message passing interface (MPI) standard provides new opportunities for developing high performance parallel and distributed applications. We use LogGP as a conceptual framework for evaluating the performance of MPI communi...


A High-Performance Active Digital Library

We describe Javaflow and Paraflow, the client and server parts of a digital library, providing high-performance data-retrieval and data-mining services, with emphasis on user interface as well as computing efficiency. Paraflow is a component model for high-performance computing, implemented as a thin layer on MPI (Message Passing Interface); it controls a heterogeneous metacomputer, allowing gr...


MPICH on the T3D: A Case Study of High Performance Message Passing

This paper describes the design, implementation and performance of a port of the Argonne National Laboratory/Mississippi State University MPICH implementation of the Message Passing Interface standard to the Cray T3D massively parallel processing system. A description of the factors influencing the design and the various stages of implementation are presented. Performance results revealing supe...



Journal:

Volume   Issue

Pages  -

Publication date: 2002